Despite their excellent performance in image generation, Generative Adversarial Networks (GANs) are notorious for requiring enormous storage and intensive computation. As an effective "performance maker", knowledge distillation has proved particularly efficacious for building low-cost GANs. In this paper, we investigate the irreplaceability of the teacher discriminator and present a novel discriminator-cooperated distillation, abbreviated as DCD, for refining better feature maps from the generator. In contrast to conventional pixel-to-pixel matching in feature-map distillation, our DCD uses the teacher discriminator as a transformation to drive the intermediate results of the student generator to be perceptually close to the corresponding outputs of the teacher generator. Furthermore, to mitigate mode collapse in GAN compression, we construct a collaborative adversarial training paradigm in which the teacher discriminator is trained from scratch jointly with the student generator in company with our DCD. Our DCD shows superior results compared with existing GAN compression methods. For instance, after reducing the MACs of CycleGAN by over 40x and its parameters by over 80x, we decrease the FID from 61.53 to 48.24, while the current SoTA method achieves only 51.92. The source code is available at https://github.com/poopit/DCD-official.
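The core idea of DCD can be sketched in a few lines: instead of matching generator feature maps pixel-to-pixel, both the student's and the teacher's intermediate outputs are first pushed through the (frozen) teacher discriminator and compared in that perceptual space. The sketch below is a minimal toy illustration, not the paper's implementation; the linear-plus-ReLU "discriminator", the L1 distance, and all shapes are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_discriminator_features(x, weights):
    # Stand-in for the (frozen) teacher discriminator's intermediate
    # layers: a fixed linear map followed by a ReLU.
    return np.maximum(x @ weights, 0.0)

def dcd_loss(student_feat, teacher_feat, weights):
    # Discriminator-cooperated distillation: compare generator features
    # *after* projecting both through the teacher discriminator, rather
    # than matching them pixel-to-pixel.
    s = toy_discriminator_features(student_feat, weights)
    t = toy_discriminator_features(teacher_feat, weights)
    return float(np.mean(np.abs(s - t)))  # L1 distance in discriminator space

# Toy setup: 4 flattened "feature maps" of dimension 16, projected to 8 units.
W = rng.standard_normal((16, 8))
teacher = rng.standard_normal((4, 16))
near_student = teacher + 0.1 * rng.standard_normal((4, 16))
far_student = rng.standard_normal((4, 16))
```

A student whose features are perceptually close to the teacher's incurs a small loss, while an unrelated student is penalized heavily, which is the training signal DCD feeds back to the compressed generator.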
This paper proposes content relationship distillation (CRD) to tackle over-parameterized generative adversarial networks (GANs) for serviceability on cutting-edge devices. In contrast to traditional instance-level distillation, we design a novel GAN-compression-oriented knowledge by slicing the contents of teacher outputs into multiple fine-grained granularities, such as row/column strips (global information) and image patches (local information), modeling the relationships among them, such as pairwise distance and triplet-wise angle, and encouraging the student to capture these relationships within its output contents. Built upon our proposed content-level distillation, we also deploy an online teacher discriminator, which keeps updating when co-trained with the teacher generator and is frozen when co-trained with the student generator, for better adversarial training. We perform extensive experiments on three benchmark datasets, and the results show that our CRD achieves the greatest complexity reduction on GANs while obtaining the best performance in comparison with existing methods. For example, we reduce the MACs of CycleGAN by around 40x and its parameters by over 80x while obtaining an FID of 46.61, compared with 51.92 for the current state of the art. Code of this project is available at https://github.com/TheKernelZ/CRD.
Synthetic health data have the potential to mitigate privacy concerns when sharing data to support biomedical research and the development of innovative healthcare applications. Modern methods for generating such data, based on machine learning and in particular generative adversarial network (GAN) approaches, continue to evolve and show great promise. However, there is a lack of a systematic assessment framework for benchmarking methods and determining which are most suitable. In this work, we introduce a generalizable benchmarking framework to assess key characteristics of synthetic health data in terms of utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods on electronic health record (EHR) data from two large academic medical centers. The results show that there is a utility-privacy trade-off in sharing synthetic EHR data. They further indicate that no single method is best on all criteria in every use case, which illustrates why synthetic data generation methods need to be assessed in context.
Although machine learning (ML) has made tremendous progress over the past decade, recent research has shown that ML models are vulnerable to various security and privacy attacks. So far, most attacks in this field have focused on discriminative models, represented by classifiers, while little attention has been paid to the security and privacy risks of generative models, such as generative adversarial networks (GANs). In this paper, we propose the first set of training-dataset property inference attacks against GANs. Specifically, the adversary aims to infer a macro-level property of the training dataset, i.e., the proportion of the samples used to train the target GAN that exhibit a certain attribute. A successful property inference attack allows the adversary to gain extra knowledge of the target GAN's training dataset, directly violating the intellectual property of the target model owner. It can also serve as a fairness auditor to check whether the target GAN was trained on a biased dataset. Moreover, property inference can be used as a building block for other advanced attacks, such as membership inference. We propose a general attack pipeline that can be tailored to two attack scenarios, a full black-box setting and a partial black-box setting; for the latter, we introduce a novel optimization framework to increase attack effectiveness. Extensive experiments over four representative GAN models on five property inference tasks show that our attacks achieve strong performance. In addition, we show that our attacks can be used to enhance the performance of membership inference against GANs.
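The full black-box scenario described above reduces to a simple pipeline: query the target GAN for samples, run a property classifier over them, and report the classified proportion as the estimate of the training-set proportion. The sketch below is an assumed, self-contained illustration of that pipeline; the "GAN" is a stand-in sampler and the property (a positive-valued attribute) is invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_gan_sample(n):
    # Stand-in for the black-box target GAN. Its hidden training set had
    # the property (here: a positive-valued attribute) in 70% of samples,
    # and the generator's output distribution mirrors that proportion.
    has_prop = rng.random(n) < 0.7
    return np.where(has_prop,
                    rng.uniform(0.1, 1.0, n),
                    rng.uniform(-1.0, -0.1, n))

def property_classifier(samples):
    # The adversary's off-the-shelf classifier for the target property.
    return samples > 0

def infer_property_proportion(n_queries=10_000):
    # Full black-box attack: query the GAN, classify each generated
    # sample, and report the estimated training-set proportion.
    generated = target_gan_sample(n_queries)
    return float(property_classifier(generated).mean())

estimate = infer_property_proportion()
```

The partial black-box setting in the abstract additionally optimizes the latent inputs fed to the generator, which this sketch omits.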
Recently, a series of algorithms has been explored for GAN compression, aiming to reduce the enormous computational overhead and memory usage when deploying GANs on resource-constrained edge devices. However, most existing GAN compression work focuses only on how to compress the generator, without considering the discriminator. In this work, we revisit the role of the discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme, termed GCC. In GCC, a selective activation discriminator automatically selects and activates convolutional channels according to a local capacity constraint and a global coordination constraint, which helps maintain a Nash equilibrium with the lightweight generator during adversarial training and avoids mode collapse. The original generator and discriminator are also optimized from scratch as teacher models to progressively refine the pruned generator and the selective activation discriminator. A novel online collaborative distillation scheme is designed to fully exploit the intermediate features of the teacher generator and discriminator, further boosting the performance of the lightweight generator. Extensive experiments on generation tasks with various GANs demonstrate the effectiveness and generalization of GCC. Among other results, GCC helps reduce 80% of the computational cost while maintaining comparable performance on image translation tasks. Our code and models are available at https://github.com/sjleo/gcc.
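The interplay of the two constraints on the selective activation discriminator can be mimicked with a tiny channel-selection routine: each layer first keeps its most salient channels up to a local fraction, and a global budget then prunes across layers. This is purely an assumed toy of the selection logic, not GCC's actual mechanism; the saliency scores and both parameter names are invented.

```python
def select_channels(saliency_per_layer, local_keep, global_budget):
    # Toy selective activation: each layer keeps at most a `local_keep`
    # fraction of its channels (local capacity constraint); if the total
    # still exceeds `global_budget`, the globally least salient survivors
    # are dropped too (global coordination constraint).
    kept = []
    for layer, scores in enumerate(saliency_per_layer):
        k = max(1, int(len(scores) * local_keep))
        top = sorted(range(len(scores)), key=lambda c: scores[c])[-k:]
        kept.extend((layer, c, scores[c]) for c in top)
    kept.sort(key=lambda item: item[2], reverse=True)
    return {(layer, c) for layer, c, _ in kept[:global_budget]}
```

In GCC proper, the selection is learned jointly with adversarial training rather than computed from fixed scores, so the activated channel set adapts as the lightweight generator evolves.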
Super-resolution (SR) is a fundamental and representative task in the low-level vision area. It is generally believed that the features extracted from SR networks carry no specific semantic information, and that the network simply learns a complex non-linear mapping from input to output. Can we find any "semantics" in SR networks? In this paper, we give an affirmative answer to this question. By analyzing feature representations with dimensionality reduction and visualization, we successfully discover deep semantic representations in SR networks, i.e., deep degradation representations (DDR), which relate to the type and degree of image degradation. We also reveal the differences in representational semantics between classification and SR networks. Through extensive experiments and analysis, we draw a series of observations and conclusions of significance for future work, such as interpreting the intrinsic mechanisms of low-level CNN networks and developing new evaluation approaches for blind SR.
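The analysis step described above, i.e. dimensionality reduction over deep features to see whether degradation-related clusters emerge, can be sketched with plain PCA. The features below are synthetic stand-ins (two degradation types separated by an offset), so this only illustrates the inspection technique, not the paper's networks or data.

```python
import numpy as np

def pca_project(features, k=2):
    # Dimensionality reduction used to inspect deep features: PCA via
    # SVD of the centered feature matrix, keeping the top k directions.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Toy deep features from two degradation types (e.g. blur vs. noise):
# if a degradation-related representation exists, the same extractor
# yields clusters that separate in the low-dimensional projection.
rng = np.random.default_rng(0)
feats_blur = rng.standard_normal((50, 32)) + 3.0
feats_noise = rng.standard_normal((50, 32)) - 3.0
proj = pca_project(np.vstack([feats_blur, feats_noise]))
```

Separation of the two groups along the first principal component is the kind of visual evidence for degradation semantics that the abstract refers to.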
In many machine learning applications, large-scale and privacy-sensitive data are generated on numerous mobile or IoT devices, and collecting the data in a centralized location may be prohibitive. It is therefore increasingly attractive to estimate parameters on the mobile or IoT devices while keeping the data local. This learning setting is known as cross-device federated learning. In this paper, we propose the first theoretically guaranteed algorithms for general minimax problems in the cross-device federated learning setting. Our algorithms require only a small fraction of the devices in each round of training, which overcomes the difficulty introduced by the low availability of devices. The communication overhead is further reduced by performing multiple local update steps on clients before communicating with the server, and global gradient estimates are leveraged to correct the bias in the local update directions introduced by data heterogeneity. Through a novel potential-function-based analysis, we establish theoretical convergence guarantees for our algorithms. Experimental results on AUC maximization, robust adversarial network training, and GAN training tasks demonstrate the efficiency of our algorithms.
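The training loop described above (partial device participation plus multiple local steps) can be sketched with local gradient descent-ascent on a toy per-client saddle problem. This is a hedged simplification: the quadratic client losses, the sampling scheme, and all hyperparameters are invented, and the global gradient-correction term from the abstract is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgda_round(x, y, clients, lr=0.1, local_steps=5, sampled=2):
    # One round of cross-device federated minimax: sample a small subset
    # of devices (low availability), run several local descent(x)/ascent(y)
    # steps on each client's f_i(x, y) = (x - a_i)^2 - (y - b_i)^2,
    # then average the returned iterates on the server.
    idx = rng.choice(len(clients), size=sampled, replace=False)
    xs, ys = [], []
    for i in idx:
        a, b = clients[i]
        xi, yi = x, y
        for _ in range(local_steps):
            xi -= lr * 2.0 * (xi - a)      # gradient descent on x
            yi += lr * (-2.0) * (yi - b)   # gradient ascent on y
        xs.append(xi)
        ys.append(yi)
    return float(np.mean(xs)), float(np.mean(ys))

# Four clients; the average loss has its saddle point at (1.5, 0.5).
clients = [(1.0, -1.0), (3.0, 1.0), (2.0, 0.0), (0.0, 2.0)]
x, y = 10.0, -10.0
for _ in range(200):
    x, y = local_sgda_round(x, y, clients)
```

Because only a random subset participates each round, the iterates hover in a neighborhood of the saddle point; the bias-correction via global gradient estimates mentioned in the abstract is what shrinks this neighborhood under data heterogeneity.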
In this paper, we investigate the impact of diverse user preferences on learning under the stochastic multi-armed bandit (MAB) framework. We aim to show that when the user preferences are sufficiently diverse and each arm can be optimal for certain users, the O(log T) regret incurred by exploring the sub-optimal arms under the standard stochastic MAB setting can be reduced to a constant. Our intuition is that to achieve sub-linear regret, the number of times an optimal arm is pulled should scale linearly in time; when all arms are optimal for certain users and pulled frequently, the estimated arm statistics can quickly converge to their true values, dramatically reducing the need for exploration. We cast the problem into a stochastic linear bandits model, where both the user preferences and the states of the arms are modeled as independent and identically distributed (i.i.d.) d-dimensional random vectors. After receiving the user preference vector at the beginning of each time slot, the learner pulls an arm and receives a reward equal to the inner product of the preference vector and the arm state vector. We also assume that the state of the pulled arm is revealed to the learner once it is pulled. We propose a Weighted Upper Confidence Bound (W-UCB) algorithm and show that it can achieve a constant regret when the user preferences are sufficiently diverse. The performance of W-UCB under general setups is also completely characterized and validated with synthetic data.
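The interaction model above is concrete enough to simulate: each slot reveals a preference vector, the learner scores each arm by estimated reward plus a confidence width, and the pulled arm's state is revealed. The sketch below uses a generic count-based UCB rather than the paper's W-UCB weighting (which is not specified in the abstract); the two-arm setup, noise level, and exploration constant are assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def ucb_pick(theta, mean_states, counts, t, alpha=0.5):
    # Score each arm by the estimated reward <theta, mu_hat> plus a
    # count-based confidence width; unpulled arms get infinite width.
    best, best_score = 0, -math.inf
    for k, (mu_hat, n) in enumerate(zip(mean_states, counts)):
        width = math.inf if n == 0 else alpha * math.sqrt(math.log(t + 1) / n)
        score = float(theta @ mu_hat) + width
        if score > best_score:
            best, best_score = k, score
    return best

# Two arms whose i.i.d. states center on orthogonal directions; diverse
# Gaussian preferences make each arm optimal for roughly half the users,
# so both arms are pulled often and their estimates converge quickly.
true_means = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
mean_states = [np.zeros(2), np.zeros(2)]
counts = [0, 0]
for t in range(2000):
    theta = rng.standard_normal(2)                        # user preference
    k = ucb_pick(theta, mean_states, counts, t)
    state = true_means[k] + 0.1 * rng.standard_normal(2)  # revealed state
    counts[k] += 1
    mean_states[k] += (state - mean_states[k]) / counts[k]
```

That both arms accumulate linearly many pulls, so that their estimates converge without dedicated exploration, is exactly the diversity effect the abstract exploits to reach constant regret.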
The Super-Resolution Generative Adversarial Network (SRGAN) [1] is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN: network architecture, adversarial loss, and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN [2] to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won first place in the PIRM2018-SR Challenge [3]. The code is available at https://github.com/xinntao/ESRGAN.
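The relativistic idea borrowed from [2] replaces the absolute real/fake judgment with a comparison against the average score of the opposite class. A minimal numeric sketch of the relativistic average discriminator loss is given below; the small epsilon and the toy logit values are assumptions for illustration, and the generator-side loss (with real/fake roles swapped) is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(d_real, d_fake):
    # Relativistic average discriminator loss: the discriminator learns
    # to judge real images as more realistic than the *average* fake,
    # and fake images as less realistic than the average real.
    real_term = -np.log(sigmoid(d_real - d_fake.mean()) + 1e-12).mean()
    fake_term = -np.log(1.0 - sigmoid(d_fake - d_real.mean()) + 1e-12).mean()
    return float(real_term + fake_term)
```

A discriminator that already separates real from fake logits (e.g. +5 vs. -5) incurs a near-zero loss, while reversed logits are penalized heavily, which is the gradient signal that shapes ESRGAN's textures.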
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
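The first insight above, using support masks to build dynamic class centers that re-weight query features, can be sketched with a mask-averaged prototype and cosine re-weighting. This is an assumed illustration of the general idea, not RefT's module; both function names and the cosine weighting scheme are invented stand-ins.

```python
import numpy as np

def masked_class_center(support_feat, support_mask):
    # Average support features inside the object mask -> a dynamic class
    # center (the mask-based weighting of the first enhancement step).
    w = support_mask.astype(float).ravel()
    f = support_feat.reshape(-1, support_feat.shape[-1])
    return (w[:, None] * f).sum(axis=0) / max(w.sum(), 1e-8)

def reweight_query(query_feat, center):
    # Re-weight flattened query features by their cosine similarity to
    # the class center, emphasising locations that resemble the class.
    q = query_feat.reshape(-1, query_feat.shape[-1])
    sim = q @ center / (np.linalg.norm(q, axis=1) * np.linalg.norm(center) + 1e-8)
    return q * sim[:, None]
```

The second, instance-level enhancement in RefT then links support object queries to query-image object queries via cross-attention, which has no counterpart in this feature-level sketch.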